We address the theoretical and practical problems related to trajectory generation and tracking control of tail-sitter UAVs. Theoretically, we focus on the differential flatness property with full exploitation of actual UAV aerodynamic models, which lays a foundation for generating dynamically feasible trajectories and achieving high-performance tracking control. We find that a tail-sitter is differentially flat with accurate aerodynamic models within the entire flight envelope, by specifying the coordinated-flight condition and choosing the vehicle position as the flat output. This fundamental property allows us to fully exploit high-fidelity aerodynamic models in trajectory planning and tracking control to achieve accurate tail-sitter flight. In particular, an optimization-based trajectory planner for tail-sitters is proposed to design high-quality, smooth trajectories with consideration of kinodynamic constraints, singularity-free constraints, and actuator saturation. The planned flat-output trajectory is transformed into a state trajectory in real time, taking environmental wind into account. To track the state trajectory, a global, singularity-free, and minimally-parameterized on-manifold MPC is developed, which fully leverages the accurate aerodynamic model to achieve high-accuracy trajectory tracking within the whole flight envelope. The effectiveness of the proposed framework is demonstrated through extensive real-world experiments in both indoor and outdoor field tests, including agile SE(3) flight through consecutive narrow windows requiring specific attitudes at speeds up to 10 m/s, typical tail-sitter maneuvers (transition, level flight, and loiter) at speeds up to 20 m/s, and extremely aggressive aerobatic maneuvers (Wingover, Loop, Vertical Eight, and Cuban Eight) with accelerations up to 2.5 g.
The emergence of low-cost, small-form-factor, and light-weight solid-state LiDAR sensors has brought new opportunities for autonomous unmanned aerial vehicles (UAVs) by advancing navigation safety and computation efficiency. Yet the successful development of LiDAR-based UAVs relies on extensive simulation. Existing simulators can hardly perform simulations of real-world environments because they require dense mesh maps that are difficult to obtain. In this paper, we develop a point-realistic simulator of real-world scenes for LiDAR-based UAVs. The key idea is the underlying point rendering method, where we construct a depth image directly from the point cloud map and interpolate it to obtain realistic LiDAR point measurements. Our developed simulator is able to run on a light-weight computing platform and supports the simulation of LiDARs with different resolutions and scanning patterns, dynamic obstacles, and multi-UAV systems. Developed in the ROS framework, the simulator can easily communicate with other key modules of an autonomous robot, such as perception, state estimation, planning, and control. Finally, the simulator provides 10 high-resolution point cloud maps of various real-world environments, including forests of different densities, a historic building, an office, a parking garage, and various complex indoor environments. These realistic maps provide diverse testing scenarios for an autonomous UAV. Evaluation results show that the developed simulator achieves superior performance in terms of time and memory consumption against Gazebo, and that the simulated UAV flights closely match actual ones in real-world environments. We believe such a point-realistic and light-weight simulator is crucial to bridge the gap between UAV simulation and experiments and will significantly facilitate the research of LiDAR-based autonomous UAVs in the future.
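The core rendering step described above can be illustrated with a minimal sketch: projecting a point-cloud map into a range image with per-pixel z-buffering, so that only the nearest return survives, as a real LiDAR would see it. The function name, parameters, and the simple spherical projection below are illustrative assumptions, not the simulator's actual API.

```python
import numpy as np

def render_lidar_scan(points, fov_h, fov_v, res_h, res_v):
    """Project a point-cloud map into a range (depth) image.

    points : (N, 3) array of map points in the sensor frame.
    Returns a (res_v, res_h) range image; empty pixels are np.inf.
    Each pixel keeps the nearest point, mimicking LiDAR occlusion.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                                     # azimuth angle
    el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))   # elevation angle

    # Keep only points inside the sensor field of view.
    mask = (np.abs(az) <= fov_h / 2) & (np.abs(el) <= fov_v / 2) & (r > 0)
    az, el, r = az[mask], el[mask], r[mask]

    # Map angles to pixel coordinates.
    u = ((az + fov_h / 2) / fov_h * (res_h - 1)).astype(int)
    v = ((el + fov_v / 2) / fov_v * (res_v - 1)).astype(int)

    depth = np.full((res_v, res_h), np.inf)
    # np.minimum.at keeps the closest return per pixel (z-buffering).
    np.minimum.at(depth, (v, u), r)
    return depth
```

A scan pattern (e.g. a spinning or solid-state pattern) can then be simulated by sampling and interpolating this range image at the pattern's ray directions.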
In this paper, we address the problem of online whole-body motion planning (SE(3) planning) for quadrotors in unknown and unstructured environments. We propose a novel multi-resolution search method that identifies narrow regions requiring full-pose planning and normal regions requiring only position planning. As a result, the quadrotor planning problem is decomposed into several SE(3) (if necessary) and R^3 subproblems. To fly through the identified narrow regions, a carefully designed corridor generation strategy for narrow regions is proposed, which significantly improves the planning success rate. The overall problem decomposition and hierarchical planning framework greatly accelerate the planning process, making it possible to run online with full onboard sensing and computation in unknown environments. Extensive simulation benchmark comparisons show that the proposed method is orders of magnitude faster than state-of-the-art methods in computation time while maintaining a high planning success rate. The proposed method is finally integrated into a LiDAR-based autonomous quadrotor, and various real-world experiments in unknown and unstructured environments are conducted to demonstrate its outstanding performance.
Accurate self and relative state estimation is a critical precondition for completing swarm tasks such as collaborative autonomous exploration, target tracking, and search and rescue. This paper proposes a fully decentralized state estimation method for aerial swarm systems, in which each drone performs precise self-state estimation, exchanges self-state and mutual-observation information via wireless communication, and estimates relative states with respect to (w.r.t.) neighboring drones, all in real time and based only on LiDAR-inertial measurements. A novel 3D LiDAR-based drone detection, identification, and tracking method is proposed to obtain observations of teammate drones. The mutual-observation measurements are then tightly coupled with IMU and LiDAR measurements to estimate the self-state and relative states in real time and accurately. Extensive real-world experiments show broad adaptability to complicated scenarios, including GPS-denied scenes and scenes where cameras (in a dark night) or LiDAR (facing a single wall) degenerate. Compared with ground truth provided by a motion capture system, the results show centimeter-level localization accuracy, which outperforms other state-of-the-art LiDAR-inertial odometry methods for single-drone systems.
Quadrotors are agile platforms. In the hands of human experts, they can perform extremely high-speed flights in cluttered environments. However, autonomous high-speed flight remains a significant challenge. In this work, we propose a motion planning algorithm based on the corridor-constrained minimum control effort trajectory optimization (MINCO) framework. Specifically, we use a series of overlapping spheres to represent the free space of the environment and propose two novel designs that enable the algorithm to plan high-speed quadrotor trajectories in real time. The first is a sampling-based corridor generation method that generates spheres with large overlapping areas (hence a large total corridor size) between adjacent spheres. The second is a Receding Horizon Corridor (RHC) strategy, in which part of the previously generated corridor is reused in each replan. Together, the two designs enlarge the corridor space in accordance with the quadrotor's current state, thus enabling the quadrotor to maneuver at high speed. We benchmark our algorithm against other state-of-the-art planning methods to show its superiority in simulation. Comprehensive ablation studies are also conducted to show the necessity of the two designs. The proposed method is finally evaluated on an autonomous LiDAR-based quadrotor UAV in a woods environment, achieving flight speeds over 13.7 m/s without any prior map of the environment or external localization facility.
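The sphere-corridor idea can be sketched in a few lines: grow the largest obstacle-free sphere at points along a guiding path, adding a new sphere whenever the last one no longer covers the path with enough overlap margin. The function names, the point-obstacle representation, and the simple overlap rule below are illustrative assumptions, not the paper's sampling-based generator.

```python
import numpy as np

def grow_sphere(center, obstacles, r_max):
    """Radius of the largest obstacle-free sphere at `center` (capped at r_max)."""
    if len(obstacles) == 0:
        return r_max
    d = np.linalg.norm(obstacles - center, axis=1)
    return min(r_max, d.min())

def sphere_corridor(path, obstacles, r_max=3.0, min_overlap=0.5):
    """Cover a guiding path with a chain of overlapping free spheres.

    Walks along the path and adds a new sphere once the current one
    no longer contains the next waypoint with `min_overlap` to spare.
    Returns a list of (center, radius) pairs.
    """
    spheres = [(path[0], grow_sphere(path[0], obstacles, r_max))]
    for p in path[1:]:
        c, r = spheres[-1]
        # Spawn a new sphere when the waypoint leaves the overlap margin.
        if np.linalg.norm(p - c) > r - min_overlap:
            spheres.append((p, grow_sphere(p, obstacles, r_max)))
    return spheres
```

The trajectory optimizer then constrains each trajectory piece to lie inside its assigned sphere, which is a simple quadratic constraint per piece.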
For most LiDAR-inertial odometry systems, accurate initial states, including the temporal offset and extrinsic transformation between the LiDAR and the 6-axis IMU, play a significant role and are often viewed as prerequisites. However, such information may not always be available in customized LiDAR-inertial systems. In this paper, we propose LI-Init: a complete, real-time LiDAR-inertial system initialization process that calibrates the temporal offset and extrinsic parameters between LiDARs and IMUs, as well as the gravity vector and IMU bias, by aligning the state estimated from LiDAR measurements with that measured by the IMU. We implement the proposed method as an initialization module, which, if enabled, automatically detects the degree of excitation of the collected data and calibrates, on-the-fly, the temporal offset, extrinsics, gravity vector, and IMU bias, which are then used as high-quality initial state values for real-time LiDAR-inertial odometry systems. Experiments conducted with different types of LiDARs and LiDAR-inertial combinations show the robustness, adaptability, and efficiency of our initialization method. The implementation of our LiDAR-inertial initialization process LI-Init and the test data are open-sourced on GitHub and integrated into the state-of-the-art LiDAR-inertial odometry system FAST-LIO2.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) an intermediate layer of the teacher network as the target performs better than the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU in ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
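Finding 1) above, distilling token relations rather than individual features, can be sketched as matching the teacher's and student's token-to-token similarity maps with a soft cross-entropy. This is a simplified stand-in (plain cosine-similarity relations in numpy) for TinyMIM's actual attention-relation targets; the function names and the temperature parameter are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_relation(tokens, tau=1.0):
    """Softmax-normalized pairwise token similarities for one image.
    tokens: (N, D) array of patch-token features."""
    t = tokens / np.linalg.norm(tokens, axis=-1, keepdims=True)
    return softmax(t @ t.T / tau)

def relation_distill_loss(student_tokens, teacher_tokens, tau=1.0):
    """Soft cross-entropy between teacher and student relation maps.
    Minimized exactly when the student reproduces the teacher's
    token-to-token relation structure."""
    s = token_relation(student_tokens, tau)
    t = token_relation(teacher_tokens, tau)
    return float(-(t * np.log(np.clip(s, 1e-9, None))).sum(axis=-1).mean())
```

Because the loss depends only on relations between tokens, the student's feature dimension need not match the teacher's, which is one practical reason relation distillation suits mismatched model sizes.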
This paper presents a practical global optimization algorithm for the K-center clustering problem, which aims to select K samples as the cluster centers to minimize the maximum within-cluster distance. This algorithm is based on a reduced-space branch-and-bound scheme and guarantees convergence to the global optimum in a finite number of steps by branching only on the regions of centers. To improve efficiency, we have designed a two-stage decomposable lower bound, the solution of which can be derived in closed form. In addition, we propose several acceleration techniques to narrow down the region of centers, including bound tightening, sample reduction, and parallelization. Extensive studies on synthetic and real-world datasets have demonstrated that our algorithm can solve K-center problems to global optimality within 4 hours for ten million samples in serial mode and one billion samples in parallel mode. Moreover, compared with state-of-the-art heuristic methods, the global optimum obtained by our algorithm reduces the objective function by 25.8% on average across all the synthetic and real-world datasets.
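For concreteness, the objective being globally minimized, and the classical greedy heuristic (farthest-first traversal, a 2-approximation) that exact methods like this are benchmarked against, can be sketched as follows. This is only the textbook baseline, not the paper's branch-and-bound algorithm; the function names are illustrative.

```python
import numpy as np

def kcenter_objective(X, centers):
    """Maximum within-cluster distance: assign each sample to its
    nearest center and report the worst-case distance."""
    d = np.linalg.norm(X[:, None, :] - X[centers][None, :, :], axis=2)
    return d.min(axis=1).max()

def greedy_kcenter(X, k, seed=0):
    """Farthest-first traversal: start from a random center, then
    repeatedly add the sample farthest from all chosen centers.
    Guarantees an objective within 2x of the global optimum."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())          # farthest point becomes a center
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return centers
```

The 25.8% average gap reported above is measured against heuristics of this flavor, which is what motivates paying for a global method.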
Score-based diffusion models have captured widespread attention and fueled fast progress in recent vision generative tasks. In this paper, we focus on the diffusion model backbone, which has been largely neglected before. We systematically explore vision Transformers as diffusion learners for various generative tasks. With our improvements, the performance of the vanilla ViT-based backbone (IU-ViT) is boosted to be on par with traditional U-Net-based methods. We further provide a hypothesis on the implication of disentangling the generative backbone into an encoder-decoder structure and show proof-of-concept experiments verifying the effectiveness of a stronger encoder for generative tasks with an ASymmetriC ENcoder-Decoder (ASCEND). Our improvements achieve competitive results on CIFAR-10, CelebA, LSUN, CUB Bird, and large-resolution text-to-image tasks. To the best of our knowledge, we are the first to successfully train a single diffusion model on the text-to-image task beyond 64x64 resolution. We hope this will motivate people to rethink the modeling choices and training pipelines for diffusion-based generative models.
Deep learning-based methods have achieved significant performance in image defogging. However, existing methods are mainly developed for land scenes and perform poorly when dealing with overwater foggy images, since overwater scenes typically contain large expanses of sky and water. In this work, we propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater scenes. To promote the recovery of objects on the water, two loss functions are exploited for the network, where a prior map is designed to invert the dark channel and min-max normalization is used to suppress the sky and emphasize objects. However, due to the unpaired training set, the network may learn an under-constrained domain mapping from foggy to fog-free images, leading to artifacts and loss of details. Thus, we propose an intuitive Upscaling Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC) to mitigate this issue. Extensive qualitative and quantitative comparisons demonstrate that the proposed method outperforms state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.